This is an introduction to explaining machine learning models with Shapley values. Shapley values are a widely used approach from cooperative game theory that comes with desirable properties. This tutorial is designed to help build a solid understanding of how to compute and interpret Shapley-based explanations of machine learning models. We will take a practical, hands-on approach, learning by example as we use the shap
Python package to explain progressively more complex models. This is a living document that serves as an introduction to the shap
Python package, so if you have feedback or contributions please open an issue or pull request to make this tutorial better!
Note that this document is not yet complete, so some content is still missing.
Outline
Before using Shapley values to explain complicated models, it is helpful to understand how they work for simple models. One of the simplest model types is standard linear regression, so below we train a linear regression model on the classic Boston housing dataset. This dataset consists of 506 neighborhood regions around Boston in 1978, where our goal is to predict the median home price (in thousands of dollars) in each neighborhood from 13 different features:
In [1]:
import shap
import sklearn.linear_model
# a classic housing price dataset
X,y = shap.datasets.boston()
# a simple linear model
model = sklearn.linear_model.LinearRegression()
model.fit(X, y)
Out[1]:
In [2]:
print("Model coefficients:\n")
for i in range(X.shape[1]):
print(X.columns[i], "=", model.coef_[i].round(4))
While coefficients are great for telling us what will happen when we change the value of an input feature, by themselves they are not a great way to measure the overall importance of a feature. This is because the value of each coefficient depends on the scale of the input features. If, for example, we were to measure the age of a home in minutes instead of years, then the coefficient for the AGE feature would become $0.0007 / (365 \cdot 24 \cdot 60) \approx 1.3 \times 10^{-9}$. Clearly the number of minutes since a house was built is no less informative than the number of years, yet its coefficient value is over five hundred thousand times smaller. This means that the magnitude of a coefficient is not necessarily a good measure of a feature's importance in a linear model.
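To see this effect concretely, here is a small sketch (reusing the model, X, and y objects defined above) that refits the same linear model after converting AGE from years to minutes. The predictions are unchanged, but the AGE coefficient shrinks by a factor of $365 \cdot 24 \cdot 60$:

# rescale AGE from years to minutes and refit the same linear model
X_minutes = X.copy()
X_minutes["AGE"] = X_minutes["AGE"] * 365 * 24 * 60

model_minutes = sklearn.linear_model.LinearRegression()
model_minutes.fit(X_minutes, y)

# the fitted model is equivalent, but the AGE coefficient is ~525,600 times smaller
age_idx = list(X.columns).index("AGE")
print("AGE coefficient (years):  ", model.coef_[age_idx])
print("AGE coefficient (minutes):", model_minutes.coef_[age_idx])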
To understand a feature's importance in a model it is necessary to understand both how changing that feature impacts the model's output, and also the distribution of that feature's values. To visualize this for a linear model we can build a classical partial dependence plot and show the distribution of feature values as a histogram on the x-axis:
In [3]:
%config InlineBackend.figure_format = 'retina'
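# plot the partial dependence of the model output on the AGE feature, with the expected value lines shown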
shap.partial_dependence_plot("AGE", model.predict, X, model_expected_value=True, feature_expected_value=True)
The gray horizontal line in the plot above represents the expected value of the model when applied to the Boston housing dataset. The vertical gray line represents the average value of the AGE feature. Note that the blue partial dependence plot line (which is the average value of the model output when we fix the AGE feature to a given value) always passes through the intersection of the two gray expected value lines. We can consider this intersection point as the "center" of the partial dependence plot with respect to the data distribution. The impact of this centering will become clear when we turn to Shapley values next.
The core idea behind Shapley value based explanations of machine learning models is to use fair allocation results from cooperative game theory to allocate credit for a model's output $f(x)$ among its input features [cite]. In order to connect game theory with machine learning models, it is necessary to both match a model's input features with players in a game, and also match the model function with the rules of the game. Since in game theory a player can join or not join a game, we need a way for a feature to "join" or "not join" a model. The most common way to define what it means for a feature to "join" a model is to say that a feature has "joined" the model when we know the value of that feature, and that it has not joined the model when we don't know the value of that feature. To evaluate an existing model $f$ when only a subset $S$ of features are part of the model, we integrate out the other features using a conditional expectation formulation. This formulation can take two forms:
$$ E[f(X) \mid X_S = x_S] $$

$$ E[f(X) \mid do(X_S = x_S)] $$

The first form assumes that we know the values of the features in $S$ because we observe them. The second form assumes that we know the values of the features in $S$ because we set them. In general, the second form is usually preferable, both because it tells us how the model would behave if we were to intervene and change its inputs, and also because it is much easier to compute. For a much more in-depth discussion of the differences between these two formulations, see the separate article on causal vs. observational feature importances [TODO]. For the purposes of this tutorial we will focus entirely on the second formulation. We will also use the more specific term SHAP values to refer to Shapley values applied to a conditional expectation function of a machine learning model.
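As an illustration of the second (interventional) form, here is a minimal sketch, not how the shap package actually implements it, of holding a subset of features fixed while integrating out the remaining features over a background dataset:

import numpy as np

def interventional_expectation(f, x, S, background):
    # features in S keep their values from x; all other features are replaced
    # by values from the background dataset, then the model outputs are averaged
    samples = np.array(background, copy=True)
    samples[:, S] = x[S]
    return f(samples).mean()

# for example, the expected model output when only AGE has "joined the model" for row 18:
age_idx = list(X.columns).index("AGE")
interventional_expectation(model.predict, X.values[18], [age_idx], X.values)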
SHAP values can be very complicated to compute (they are NP-hard in general), but linear models are so simple that we can read the SHAP values right off a partial dependence plot. When we are explaining a prediction $f(x)$, the SHAP value for a specific feature $i$ is just the difference between the expected model output and the partial dependence plot at the feature's value $x_i$:
In [4]:
# compute the SHAP values for the linear model
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)
# make a standard partial dependence plot
sample_ind = 18
fig,ax = shap.partial_dependence_plot(
"AGE", model.predict, X, model_expected_value=True,
feature_expected_value=True, show=False,
shap_values=shap_values[sample_ind:sample_ind+1,:],
shap_value_features=X.iloc[sample_ind:sample_ind+1,:]
)
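In fact, for a linear model the SHAP value of feature $i$ has a simple closed form to keep in mind; it follows from the interventional formulation above (which treats the features as independent) and is just the coefficient times the feature's deviation from its mean:

$$ \phi_i(f, x) = \beta_i (x_i - E[X_i]) $$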
The close correspondence between the classic partial dependence plot and SHAP values means that if we plot the SHAP value for a specific feature across a whole dataset we will exactly trace out a mean centered version of the partial dependence plot for that feature:
In [5]:
shap.dependence_plot("AGE", shap_values, X, interaction_index=None)
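We can confirm this correspondence numerically with a quick check that reuses the shap_values computed above; for this linear model, each AGE SHAP value should equal the AGE coefficient times that sample's deviation from the mean AGE value:

import numpy as np

# for a linear model, the SHAP value of AGE is coef * (x_AGE - mean(AGE)),
# which is exactly a mean-centered version of the partial dependence line
age_idx = list(X.columns).index("AGE")
manual_age_shap = model.coef_[age_idx] * (X["AGE"] - X["AGE"].mean())
print(np.allclose(shap_values[:, age_idx], manual_age_shap))  # expected: True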
One of the fundamental properties of Shapley values is that they always sum up to the difference between the game outcome when all players are present and the game outcome when no players are present. For machine learning models this means that the SHAP values of all the input features will always sum up to the difference between the expected model output and the current model output for the prediction being explained. The easiest way to see this is through a waterfall plot that starts at our background prior expectation for a home price, $E[f(X)]$, and then adds features one at a time until we reach the current model output $f(x)$:
In [6]:
# the waterfall_plot shows how we get from explainer.expected_value to model.predict(X)[sample_ind]
shap.waterfall_plot(explainer.expected_value, shap_values[sample_ind], X.iloc[sample_ind], max_display=14)
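We can also verify this additivity property directly using the objects defined above: the SHAP values plus the expected value should reproduce the model's prediction exactly:

# the SHAP values plus the base (expected) value reproduce the model output
print(shap_values[sample_ind].sum() + explainer.expected_value)
print(model.predict(X)[sample_ind])  # these two numbers should match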
The reason the partial dependence plots of linear models have such a close connection to SHAP values is that each feature in the model is handled independently of every other feature (the effects are just added together). We can keep this additive nature while relaxing the linear requirement of straight lines. This results in the well-known class of generalized additive models (GAMs). While there are many ways to train these types of models (such as setting an XGBoost model to depth-1), below we use InterpretML's explainable boosting machine, which is specifically designed for this purpose.
In [11]:
# fit a GAM model to the data
import interpret.glassbox
model_ebm = interpret.glassbox.ExplainableBoostingRegressor()
model_ebm.fit(X, y)
# explain the GAM model with SHAP
explainer_ebm = shap.AdditiveExplainer(model_ebm, X)
shap_values_ebm = explainer_ebm.shap_values(X)
# make a standard partial dependence plot with a single SHAP value overlaid
fig,ax = shap.partial_dependence_plot(
"AGE", model_ebm.predict, X, model_expected_value=True,
feature_expected_value=True, show=False,
shap_values=shap_values_ebm[sample_ind:sample_ind+1,:],
shap_value_features=X.iloc[sample_ind:sample_ind+1,:]
)
In [14]:
# the waterfall_plot shows how we get from explainer_ebm.expected_value to model_ebm.predict(X)[sample_ind]
shap.waterfall_plot(explainer_ebm.expected_value, shap_values_ebm[sample_ind], X.iloc[sample_ind], max_display=14)
In [27]:
# the summary plot shows the distribution of SHAP values for every feature across the whole dataset
shap.summary_plot(shap_values_ebm, X, max_display=14)